The basic task in the BART is to pump up a virtual balloon using on-screen buttons. With each pump, the balloon grows a little and the player gains a point; points are linked to monetary rewards, so the more the player pumps up the balloons, the higher their payoff. The maximum size of a balloon is reached after 128 pumps. Risk is introduced through a random, uniformly distributed explosion point for each balloon, with both the mean and the median explosion point at 64 pumps. The optimal strategy for maximising payoff is therefore to perform 64 pumps per balloon. Each participant repeats the task 30 times.
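Why 64 pumps is optimal can be checked with a short sketch. Assuming the explosion point is drawn uniformly from {1, …, 128}, the balloon survives \(n\) pumps with probability \((128 - n)/128\), so the expected payoff is \(n(128 - n)/128\), which peaks at \(n = 64\):

```python
# Expected payoff for pumping n times, assuming the explosion point is
# uniform on {1, ..., 128}: the balloon survives n pumps with
# probability (128 - n) / 128 and then pays n points, otherwise 0.
def expected_payoff(n, max_pumps=128):
    return n * (max_pumps - n) / max_pumps

best = max(range(1, 129), key=expected_payoff)
print(best, expected_payoff(best))  # → 64 32.0
```

The exact distributional assumption is an illustration; the point is that the expected payoff is a quadratic in \(n\) maximised at half the maximum number of pumps.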
The BART has been commonly used in psychology research to assess risk-taking behaviour. A meta-analysis of 22 studies using the BART found the average number of pumps (averaged across conditions) to vary between 24.60 and 44.10 (out of 128 possible pumps), with a weighted standard deviation of 5.93. This suggests that, based on prior studies, participants in the BART tend on average to be risk-averse.
The data will be analysed using a Poisson regression model: the outcome variable will be the number of pumps by the participant, and the predictor will be a categorical dummy variable indicating which condition the participant is in.
\[ pumps_i \sim \text{Poisson}(\lambda_i) \\ \log(\lambda_i) = \alpha + \beta \times condition_i \] where \(condition_i\) has two levels: 0 for the constrictive condition and 1 for the expansive condition.
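As an illustration of the generative model, data could be simulated from it as follows. The parameter values and the balanced condition assignment are placeholders chosen for the sketch, not estimates from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameter values (placeholders, not study estimates):
alpha = np.log(35)   # baseline log rate: ~35 pumps in the constrictive condition
beta = 0.20          # difference on the log scale for the expansive condition

# 30 observations, half in each condition (0 = constrictive, 1 = expansive)
condition = np.repeat([0, 1], 15)
lam = np.exp(alpha + beta * condition)   # invert the log link to get the rate
pumps = rng.poisson(lam)                 # simulated pump counts
```

Fitting the model then amounts to recovering \(\alpha\) and \(\beta\) from `pumps` and `condition`.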
Below, we show three different visualizations of the probability densities for the prior distribution on the intercept parameter, \(\alpha\):
The probability density of the distribution specified as the prior of the intercept parameter.
In models which use a non-linear transformation, such as Poisson models (which use a log link), the outcome variable (here, \(pumps_i\)) and the predictors are on different scales. A one-unit change in a predictor therefore has a non-linear effect on the outcome variable. Transforming a parameter back to the response scale (i.e. applying the inverse of the link function, here exponentiation) tells you the effect of a one-unit change in that parameter on the outcome variable.
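A minimal sketch of this back-transformation, using hypothetical parameter values rather than estimates from this study:

```python
import numpy as np

alpha, beta = 3.5, 0.2   # hypothetical values on the log (link) scale

# On the response scale, exp(alpha) is the expected number of pumps in the
# condition coded 0, and exp(beta) is a *multiplicative* factor: each unit
# of beta multiplies the expected rate rather than adding to it.
baseline = np.exp(alpha)            # ~33.1 expected pumps in condition 0
ratio = np.exp(beta)                # ~1.22, the rate ratio between conditions
print(baseline, baseline * ratio)   # expected pumps in conditions 0 and 1
```

This is why effects in Poisson regression are naturally reported as rate ratios rather than additive differences.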
A prior predictive distribution is obtained by using the priors you have chosen for each parameter in the model to generate data. It incorporates information from all the parameters, not just the current one. Prior predictive distributions thus tell you what the data (here, the number of pumps) might look like, given your assumptions, before seeing the actual data. The visualization shows the prior predictive distribution — the density of the number of pumps in each condition for 20 hypothetical experiments. Assume that sensible priors have been chosen for the mean difference parameter, \(\beta\).
[Figure: prior predictive densities of the number of pumps, in panels "Constrictive" and "Expansive".]
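Such a prior predictive simulation can be sketched as follows. The priors below are placeholders for illustration, not the ones used in the text:

```python
import numpy as np

rng = np.random.default_rng(123)
n_sims, n_per_condition = 20, 30   # 20 hypothetical experiments

# Hypothetical priors (placeholders, not the priors chosen in the text):
alpha = rng.normal(np.log(30), 0.5, size=n_sims)   # intercept, log scale
beta = rng.normal(0.0, 0.3, size=n_sims)           # condition effect, log scale

for cond in (0, 1):                                 # 0 = constrictive, 1 = expansive
    lam = np.exp(alpha + beta * cond)               # one rate per simulated experiment
    pumps = rng.poisson(lam[:, None], size=(n_sims, n_per_condition))
    print(cond, pumps.mean())                       # average simulated pump count
```

Each row of `pumps` is one hypothetical experiment; plotting a density per row, per condition, reproduces the kind of visualization described above.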